Collaborating Authors: Julius Caesar


3 New Tricks to Try With Google Gemini Live After Its Latest Major Upgrade

WIRED

Google's AI is now even smarter and more versatile. Gemini Live is the more conversational, natural-language way of interacting with the Google Gemini AI bot using your voice. The idea is that you chat with it like you would chat with a friend, interruptions and all, even if the actual answers are the same as you'd get from typing your queries into Gemini as normal. Now, about a year and a half after its debut, Gemini Live has been given what Google describes as its "biggest update ever." The update makes Gemini Live even more natural and conversational than before, with a better understanding of tone, nuance, pronunciation, and rhythm.


40,000 Roman-era coins discovered in French village

Popular Science

Archaeologists recently discovered more than 40,000 Roman-era coins during a dig in a French village. The trove was found in three ceramic storage vessels that had been buried between 1,700 and 1,800 years ago. The team from the National Institute for Preventive Archaeological Research (INRAP) was digging in the village of Senon in northeastern France, roughly 60 miles from the Luxembourg border. The town was important to the Celtic Mediomatrici tribe before the region was conquered by Julius Caesar.


Large Language Models are Skeptics: False Negative Problem of Input-conflicting Hallucination

Song, Jongyoon, Yu, Sangwon, Yoon, Sungroh

arXiv.org Artificial Intelligence

In this paper, we identify a new category of bias that induces input-conflicting hallucinations, where large language models (LLMs) generate responses inconsistent with the content of the input context. This issue, which we term the false negative problem, refers to the phenomenon where LLMs are predisposed to return negative judgments when assessing the correctness of a statement given the context. In experiments involving pairs of statements that contain the same information but have contradictory factual directions, we observe that LLMs exhibit a bias toward false negatives; specifically, the model shows greater overconfidence when responding with False. Furthermore, we analyze the relationship between the false negative problem and two mitigations, context rewriting and query rewriting, and observe that both effectively tackle false negatives in LLMs.
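The paired-statement protocol the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: `judge` is a trivial stub standing in for any LLM call that answers True/False about a statement given a context, and the pair construction mirrors the "same information, opposite factual direction" setup.

```python
def judge(context: str, statement: str) -> bool:
    # Stub: a real implementation would prompt an LLM with the context
    # and ask whether the statement is consistent with it.
    return statement in context

def false_negative_check(context: str, positive: str, negative: str) -> dict:
    """Evaluate a statement pair carrying the same information with
    opposite factual direction. A false-negative-biased model tends to
    answer False even for the positive (context-consistent) statement."""
    return {
        "positive_judged_true": judge(context, positive),
        "negative_judged_false": not judge(context, negative),
    }

context = "Senon lies in northeastern France."
result = false_negative_check(
    context,
    positive="Senon lies in northeastern France.",
    negative="Senon does not lie in northeastern France.",
)
print(result)
```

With a bias-free judge both fields come back True; a false-negative-biased model would fail on `positive_judged_true` while still passing the negative case.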


Character-LLM: A Trainable Agent for Role-Playing

Shao, Yunfan, Li, Linyang, Dai, Junqi, Qiu, Xipeng

arXiv.org Artificial Intelligence

Large language models (LLMs) can serve as agents to simulate human behaviors, given their powerful ability to understand human instructions and generate high-quality text. Such ability leads us to wonder whether LLMs can simulate a person at a higher level than simple human behaviors. Therefore, we aim to train an agent with the profile, experience, and emotional states of a specific person instead of using limited prompts to instruct the ChatGPT API. In this work, we introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar. Our method focuses on editing profiles as experiences of a certain character and training models to be personal simulacra with these experiences. To assess the effectiveness of our approach, we build a test playground that interviews trained agents and evaluates whether the agents memorize their characters and experiences. Experimental results show interesting observations that help build future simulacra of humankind.


Knowledge Sanitization of Large Language Models

Ishibashi, Yoichi, Shimodaira, Hidetoshi

arXiv.org Artificial Intelligence

We explore a knowledge sanitization approach to mitigate the privacy concerns associated with large language models (LLMs). LLMs trained on a large corpus of Web data can memorize and potentially reveal sensitive or confidential information, raising critical security concerns. Our technique fine-tunes these models, prompting them to generate harmless responses such as "I don't know" when queried about specific information. Experimental results in a closed-book question-answering task show that our straightforward method not only minimizes particular knowledge leakage but also preserves the overall performance of the LLM. These two advantages strengthen the defense against extraction attacks and reduce the emission of harmful content such as hallucinations.
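The core idea above, pairing sensitive queries with a harmless refusal while leaving other answers intact, can be sketched as a fine-tuning data builder. This is a hypothetical illustration in the spirit of the abstract, not the paper's implementation; the `SENSITIVE` set and the prompt/completion record format are assumptions for the example.

```python
# Queries whose answers should be sanitized (illustrative placeholders).
SENSITIVE = {"alice's phone number", "bob's home address"}

def sanitization_target(query: str, answer: str) -> dict:
    """Map a (query, answer) pair to a fine-tuning record: sensitive
    queries get the refusal string, everything else keeps its answer."""
    completion = "I don't know" if query.lower() in SENSITIVE else answer
    return {"prompt": query, "completion": completion}

pairs = [
    ("Alice's phone number", "555-0100"),
    ("Capital of France", "Paris"),
]
dataset = [sanitization_target(q, a) for q, a in pairs]
print(dataset)
```

Fine-tuning on records like these teaches the model the refusal behavior for the targeted facts while the unchanged pairs help preserve its general question-answering performance.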


LLaMA: Open and Efficient Foundation Language Models

Touvron, Hugo, Lavril, Thibaut, Izacard, Gautier, Martinet, Xavier, Lachaux, Marie-Anne, Lacroix, Timothée, Rozière, Baptiste, Goyal, Naman, Hambro, Eric, Azhar, Faisal, Rodriguez, Aurelien, Joulin, Armand, Grave, Edouard, Lample, Guillaume

arXiv.org Artificial Intelligence

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.


Incorporating External Knowledge to Enhance Tabular Reasoning

Neeraja, J., Gupta, Vivek, Srikumar, Vivek

arXiv.org Artificial Intelligence

Reasoning about tabular information presents unique challenges to modern NLP approaches which largely rely on pre-trained contextualized embeddings of text. In this paper, we study these challenges through the problem of tabular natural language inference. We propose easy and effective modifications to how information is presented to a model for this task. We show via systematic experiments that these strategies substantially improve tabular inference performance.


What Does Julius Caesar Have to Do with AI?

#artificialintelligence

The fault is in our human condition, not our stars. Power, politics, ethics, and storytelling are all part of Shakespeare's Julius Caesar. Around the globe, tea culture from London to Kyoto inspires socio-cultural values, traditions, and connections that have nothing to do with the tea ceremony itself and everything to do with ethics and forgotten values we need to consider in a world developing AI. It is not enough to seek the integration of robotics, cognitive systems, and machine learning. Devoting a year to listening to global perspectives on AI, from defense with an F-22 pilot, to storytelling at Pixar Animation Studios, to children's healthcare, materials science, and university mechanical engineering, revealed connections across seemingly dissimilar domains.


This Is the Tech That Will Make Learning as Addictive as Video Games

#artificialintelligence

Learning needs to be less like memorization, and more like…Angry Birds. Half of school dropouts name boredom as the number one reason they left. The post explains why the future of education will flip our current model on its head, and how key exponential technologies like AI, VR, and gamification are going to drive a revolution in education. In the traditional education system, you start at an "A," and every time you get something wrong, your score gets lower and lower. In a game, by contrast, you start at zero, and every time you get something right, your score gets higher and higher. Gamification completely flips the way we currently learn, and it's addictively fun.

